Matrix Eigen-decomposition via Doubly Stochastic Riemannian Optimization

Authors

  • Zhiqiang Xu
  • Peilin Zhao
  • Jianneng Cao
  • Xiaoli Li
Abstract

Matrix eigen-decomposition is a classic and long-standing problem that plays a fundamental role in scientific computing and machine learning. Although several algorithms exist for this inherently non-convex problem, they remain inadequate for the demands of today's large-scale data. To address this gap, we propose a Doubly Stochastic Riemannian Gradient EIGenSolver, DSRG-EIGS, where the double stochasticity comes from generalizing both stochastic Euclidean gradient ascent and stochastic Euclidean coordinate ascent to Riemannian manifolds. This greatly reduces the per-iteration complexity, allows the algorithm to avoid matrix inversion entirely, and consequently makes it well suited to large-scale applications. We theoretically analyze its convergence properties and empirically validate it on real-world datasets. Encouraging experimental results demonstrate its advantages over the deterministic counterpart.
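To make the double stochasticity concrete, the following minimal NumPy sketch performs one update for maximizing tr(X^T A X) over orthonormal X (the top-k eigenspace of a symmetric A): a row of A is sampled (the stochastic-gradient part), a column of the iterate is sampled (the stochastic-coordinate part), the resulting unbiased estimate is projected onto the tangent space, and a retraction maps the step back onto the manifold. The function name dsrg_step, the uniform sampling probabilities, and the QR retraction are illustrative assumptions, not the exact DSRG-EIGS recipe (the paper's own retraction appears in the supplementary material excerpt below).

import numpy as np

def dsrg_step(A, X, alpha, rng):
    # One doubly stochastic Riemannian gradient ascent step (sketch).
    n, k = X.shape
    s = rng.integers(n)                  # sampled row of A (gradient stochasticity)
    r = rng.integers(k)                  # sampled column of X (coordinate stochasticity)
    A_s = np.zeros_like(A)
    A_s[s, :] = n * A[s, :]              # E[A_s] = A under uniform sampling
    Y = np.zeros_like(X)
    Y[:, r] = k * X[:, r]                # E[Y] = X under uniform sampling
    # Project the stochastic Euclidean gradient onto the tangent space at X.
    G = (np.eye(n) - X @ X.T) @ A_s @ Y
    # QR retraction back onto the set of orthonormal n-by-k frames.
    Q, _ = np.linalg.qr(X + alpha * G)
    return Q

rng = np.random.default_rng(0)
n, k = 100, 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # symmetric test matrix
X = np.linalg.qr(rng.standard_normal((n, k)))[0]
for t in range(5000):
    X = dsrg_step(A, X, alpha=1.0 / (t + 10.0), rng=rng)
print("achieved tr(X^T A X):", np.trace(X.T @ A @ X))
print("sum of top-k eigenvalues:", np.sort(np.linalg.eigvalsh(A))[-k:].sum())

In a serious implementation one would exploit the fact that A_s @ Y is rank-one, so each iteration costs far less than a dense matrix product; the dense form above is kept only for readability.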


Similar Articles

A Fast Algorithm for Matrix Eigen-decomposition

We propose a fast stochastic Riemannian gradient eigensolver for a real symmetric matrix, and prove its local, eigengap-dependent, linear convergence. The fast convergence is obtained by deploying the variance reduction technique originally developed for strongly convex problems in the Euclidean setting. In this paper, this technique is generalized to Riemannian manifolds for solving the g...


Low-rank tensor completion: a Riemannian manifold preconditioning approach

We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with a rank constraint. A novel Riemannian metric, or inner product, is proposed that exploits the least-squares structure of the cost function and takes into account the structured symmetry that exists in the Tucker decomposition. This specific metric allows us to use the versatile framework of Riemannian opt...


Riemannian Optimization for Skip-Gram Negative Sampling

The Skip-Gram Negative Sampling (SGNS) word embedding model, well known through its implementation in the “word2vec” software, is usually optimized by stochastic gradient descent. However, optimizing the SGNS objective can be viewed as searching for a good matrix under a low-rank constraint. The most standard way to solve problems of this type is to apply the Riemannian optimization framework...


Some results on the symmetric doubly stochastic inverse eigenvalue problem

The symmetric doubly stochastic inverse eigenvalue problem (hereafter SDIEP) is to determine the necessary and sufficient conditions for an $n$-tuple $\sigma=(1,\lambda_{2},\lambda_{3},\ldots,\lambda_{n})\in\mathbb{R}^{n}$ with $|\lambda_{i}|\leq 1,~i=1,2,\ldots,n$, to be the spectrum of an $n\times n$ symmetric doubly stochastic matrix $A$. If there exists an $n\times n$ symmetric doubly stochastic ...
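As a quick illustration of the necessary conditions in SDIEP, the short script below (a sketch; the symmetric Sinkhorn-style balancing loop and its iteration count are illustrative choices) builds a symmetric doubly stochastic matrix from a random symmetric positive matrix and confirms that its spectrum contains 1 and lies in [-1, 1].

import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.random((n, n))
M = M + M.T                              # symmetric, entrywise positive
# Symmetric Sinkhorn-style balancing: scale M -> D M D until rows sum to 1.
for _ in range(500):
    d = 1.0 / np.sqrt(M.sum(axis=1))
    M = d[:, None] * M * d[None, :]
print("row sums:", np.round(M.sum(axis=1), 6))   # all approximately 1
lam = np.linalg.eigvalsh(M)
print("spectrum:", np.round(lam, 4))             # contains 1; all |lam_i| <= 1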


Matrix Eigen-decomposition via Doubly Stochastic Riemannian Optimization: Supplementary Material

Preparation. First, based on the definitions of $A_t$, $Y_t$, $\tilde{Z}_t$ and $Z_t$, we can write $g_t = G(s_t, r_t, X_t) = p_{s_t}^{-1} p_{r_t}^{-1} (I - X_t X_t^\top)(E_{s_t\cdot} \odot A)(E_{\cdot r_t} \odot X_t) = (I - X_t X_t^\top) A_t Y_t$. Then from (6), we have $X_{t+1} = X_t + \alpha_t g_t W_t - \frac{\alpha_t^2}{2} X_t g_t^\top g_t W_t$. Since $W_t = (I + \frac{\alpha_t^2}{4} g_t^\top g_t)^{-1} = I - \frac{\alpha_t^2}{4} g_t^\top g_t + O(\alpha_t^4)$, we get $X_{t+1} = X_t + \alpha_t A_t Y_t - \alpha_t X_t X_t^\top A$...
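The expansion of $W_t$ invoked above is easy to check numerically. In the short script below (illustrative; g is a random stand-in for the gradient estimate $g_t$), the error of the first-order approximation shrinks like $\alpha^4$, matching the $O(\alpha_t^4)$ remainder.

import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 4
g = rng.standard_normal((n, k))          # stand-in for g_t
for alpha in (1e-1, 1e-2, 1e-3):
    S = (alpha**2 / 4.0) * g.T @ g
    W_exact = np.linalg.inv(np.eye(k) + S)
    W_approx = np.eye(k) - S             # first-order Neumann expansion
    err = np.linalg.norm(W_exact - W_approx)
    print(f"alpha={alpha:g}  ||W_t - (I - S)|| = {err:.3e}")  # drops ~1e4 per decade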




Publication date: 2016